Correlation between Spin Polarization and Magnetic Moment in Ferromagnetic Alloys
The correlation between the magnetic moment in ferromagnetic alloys and the
tunneling spin polarization in ferromagnet-insulator-superconductor tunneling
experiments has been a mystery. The measured spin polarization for Fe, Co, Ni,
and various Ni alloys is positive and roughly proportional to their magnetic
moments, which cannot be explained by considering the net density of states.
Using a tight-binding coherent potential approximation (CPA) model, we show
that while the polarization of the net density of states is not correlated with
the magnetic moment, the polarization of the density of states of {\it s}
electrons is correlated with the magnetic moment in the same manner as observed
by the tunneling experiments.
We also discuss the spin polarization measurements by Andreev reflection
experiments, some of which obtained different results from the tunneling
experiments and our calculations.
Comment: 8 RevTEX pages, 9 figures in ep
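The key quantity in the abstract above, the spin polarization P = (D↑ − D↓)/(D↑ + D↓) of a density of states (DOS) at the Fermi level, can be illustrated with a toy calculation. All DOS values below are hypothetical placeholders, not results from the paper's CPA model; they merely show how the net DOS and the s-electron DOS can carry opposite signs of polarization.

```python
# Illustrative sketch: tunneling spin polarization from densities of
# states at the Fermi level. All numbers are hypothetical.

def polarization(d_up, d_down):
    """P = (D_up - D_down) / (D_up + D_down)."""
    return (d_up - d_down) / (d_up + d_down)

# Hypothetical DOS values (states/eV) for a Ni-like ferromagnet:
# the net DOS is dominated by minority-spin d states (negative P),
# while the s-electron DOS is majority-spin polarized (positive P),
# matching the sign measured in tunneling experiments.
net_dos = {"up": 0.2, "down": 1.5}   # d states dominate the net DOS
s_dos = {"up": 0.12, "down": 0.08}   # s states only

p_net = polarization(net_dos["up"], net_dos["down"])
p_s = polarization(s_dos["up"], s_dos["down"])
print(f"P(net DOS) = {p_net:+.2f}")  # negative
print(f"P(s DOS)   = {p_s:+.2f}")    # positive, as in tunneling data
```

This makes the abstract's point concrete: the sign of the measured polarization tracks the s-electron DOS, not the net DOS.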
Integrating Existing Software Toolkits into VO System
The Virtual Observatory (VO) is a collection of interoperating data archives and
software tools. Taking advantage of the latest information technologies, it
aims to provide a data-intensive online research environment for astronomers
all around the world.
A large number of high-quality astronomical software packages and libraries
are powerful and easy to use, and have been widely used by astronomers for many
years. Integrating those toolkits into the VO system is a necessary and
important task for VO developers.
The VO architecture depends heavily on Grid and Web services; consequently, the
general VO integration route is "Java Ready - Grid Ready - VO Ready". In this
paper, we discuss the importance of VO integration for existing toolkits and
the possible solutions. We introduce two efforts in this field from the
China-VO project, "gImageMagick" and "Galactic abundance gradients statistical
research under grid environment". We also discuss what additional work should
be done to convert a Grid service into a VO service.
Comment: 9 pages, 3 figures, will be published in SPIE 2004 conference proceeding
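The first step of the integration route described above, making an existing command-line toolkit callable from a service layer, can be sketched as a thin wrapper around the binary. The tool name and arguments below are stand-ins (the sketch shells out to `echo` so it runs anywhere); a real wrapper for an ImageMagick-style tool would validate parameters and return results in a VO-compliant format such as VOTable.

```python
# Minimal sketch of the wrapping step in toolkit-to-service integration:
# expose an existing command-line tool behind a callable interface.
import subprocess

def call_toolkit(tool, args):
    """Run an external toolkit binary and capture its text output."""
    result = subprocess.run(
        [tool, *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

# `echo` stands in for a real toolkit binary so the sketch is runnable.
out = call_toolkit("echo", ["toolkit", "wrapped"])
print(out)
```

A Grid or Web service built on top of such a wrapper then only needs to marshal parameters in and results out, which is the remaining work of the "Grid Ready - VO Ready" steps.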
Attention, Please! Adversarial Defense via Attention Rectification and Preservation
This study provides a new understanding of the adversarial attack problem by
examining the correlation between adversarial attacks and visual attention
change. In particular, we observed that: (1) images with incomplete attention
regions are more vulnerable to adversarial attacks; and (2) successful
adversarial attacks lead to deviated and scattered attention maps. Accordingly,
an attention-based adversarial defense framework is designed to simultaneously
rectify the attention map for prediction and preserve the attention area
between adversarial and original images. The problem of adding iteratively
attacked samples is also discussed in the context of visual attention change.
We hope the attention-related data analysis and defense solution in this study
will shed some light on the mechanism behind adversarial attacks and also
facilitate future adversarial defense/attack model design.
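The attention-preservation idea in the abstract above can be sketched as a penalty that keeps the attention map of an adversarial input close to that of the clean input. This is a toy illustration, not the authors' implementation: the arrays, the mean-squared-error form of the preservation term, and the weight `lam` are all assumptions, and the separate rectification term is omitted for brevity.

```python
# Toy sketch of an attention-preservation defense objective: augment the
# classification loss with a term penalizing deviation between the clean
# and adversarial attention maps. All values are illustrative.
import numpy as np

def attention_defense_loss(ce_loss, attn_clean, attn_adv, lam=1.0):
    """Total loss = cross-entropy + lam * attention-preservation term."""
    preserve = np.mean((attn_clean - attn_adv) ** 2)  # keep maps aligned
    return ce_loss + lam * preserve

attn_clean = np.array([[0.1, 0.7], [0.1, 0.1]])  # concentrated attention
attn_adv = np.array([[0.3, 0.3], [0.2, 0.2]])    # scattered after attack
loss = attention_defense_loss(0.5, attn_clean, attn_adv)
```

Minimizing such a loss pushes the model to keep its attention concentrated under attack, which is the behavior the abstract associates with robustness.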
Investigating Through Glass Via Based RF Passives for 3-D Integration
Due to its low dielectric loss and low cost, glass has been developed as a promising material for advanced interposers in 2.5-D and 3-D integration. In this paper, through glass vias (TGVs) are used to implement inductors with minimal footprint and large quality factor. Based on the proposed physical structure, the impact of various process and design parameters on the electrical characteristics of TGV inductors is investigated with the 3-D electromagnetic simulator HFSS. It is observed that TGV inductors have identical inductance and larger quality factor in comparison with their through silicon via (TSV) counterparts. Using TGV inductors and parallel plate capacitors, a compact 3-D band-pass filter (BPF) is designed and analyzed. Compared with previously reported BPFs, the proposed TGV-based circuit has an ultra-compact size and excellent filtering performance.
Pre-training also Transfers Non-Robustness
Pre-training has enabled state-of-the-art results on many tasks. In spite of
its recognized contribution to generalization, we observed in this study that
pre-training also transfers adversarial non-robustness from the pre-trained
model to the fine-tuned model in downstream tasks. Using image classification
as an example, we first conducted experiments on various datasets and network
backbones to uncover the adversarial non-robustness of fine-tuned models.
Further analysis examined the learned knowledge of the fine-tuned model and a
standard model, and revealed that the cause of the non-robustness is the
non-robust features transferred from the pre-trained model. Finally, we
analyzed the feature-learning preferences of the pre-trained model, explored
the factors influencing robustness, and introduced a simple robust
pre-training solution.
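Probing a model's adversarial non-robustness, as the experiments above do, typically uses gradient-based attacks such as FGSM. The toy below applies an FGSM-style step to a linear classifier with synthetic weights and data (all values are made up, not from the paper) to show how a small perturbation can flip a prediction.

```python
# Toy FGSM-style probe of adversarial non-robustness on a linear
# classifier. Weights and inputs are synthetic, for illustration only.
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method step: x' = x + eps * sign(grad)."""
    return x + eps * np.sign(grad)

# Linear score f(x) = w @ x; for a positive example, the loss gradient
# w.r.t. x points along -w (the attack wants to decrease the score).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.4])

score_clean = w @ x                       # correctly positive
x_adv = fgsm_perturb(x, grad=-w, eps=0.3)
score_adv = w @ x_adv                     # flipped negative by the attack
print(f"clean: {score_clean:+.2f}, adversarial: {score_adv:+.2f}")
```

Running such a probe on a fine-tuned model versus a model trained from scratch is one way to expose the transferred non-robustness the abstract describes.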
Variational sparse inverse Cholesky approximation for latent Gaussian processes via double Kullback-Leibler minimization
To achieve scalable and accurate inference for latent Gaussian processes, we
propose a variational approximation based on a family of Gaussian distributions
whose covariance matrices have sparse inverse Cholesky (SIC) factors. We
combine this variational approximation of the posterior with a similar and
efficient SIC-restricted Kullback-Leibler-optimal approximation of the prior.
We then focus on a particular SIC ordering and nearest-neighbor-based sparsity
pattern resulting in highly accurate prior and posterior approximations. For
this setting, our variational approximation can be computed via stochastic
gradient descent in polylogarithmic time per iteration. We provide numerical
comparisons showing that the proposed double-Kullback-Leibler-optimal
Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate
for stationary kernels than alternative approaches such as inducing-point and
mean-field approximations at similar computational complexity.
Comment: Accepted at the 2023 International Conference on Machine Learning
(ICML). 18 pages with references and appendices, 14 figure
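The quantity at the heart of the double-KL approach above is the Kullback-Leibler divergence between Gaussian distributions. The sketch below computes KL(q || p) for two zero-mean Gaussians from their covariance matrices; dense matrices stand in for the paper's sparse inverse Cholesky factors, so this shows only the objective, not the scalable SIC computation.

```python
# Sketch of KL(N(0, cov_q) || N(0, cov_p)) between zero-mean Gaussians,
# the divergence minimized (for both prior and posterior) in a
# double-KL variational scheme. Dense covariances stand in for the
# paper's sparse inverse Cholesky (SIC) parameterization.
import numpy as np

def kl_gaussians(cov_q, cov_p):
    """KL divergence between zero-mean Gaussians with SPD covariances."""
    n = cov_q.shape[0]
    trace = np.trace(np.linalg.solve(cov_p, cov_q))  # tr(cov_p^-1 cov_q)
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (trace - n + logdet_p - logdet_q)

cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
print(kl_gaussians(cov, cov))        # identical distributions: ~0
print(kl_gaussians(np.eye(2), cov))  # mismatched covariances: > 0
```

In the paper's setting, restricting the Cholesky factors of the inverse covariances to a nearest-neighbor sparsity pattern is what makes minimizing this divergence tractable at scale.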